    Instruction Set Architectures for Quantum Processing Units

    Progress in quantum computing hardware raises questions about how these devices can be controlled, programmed, and integrated with existing computational workflows. We briefly describe several prominent quantum computational models, their associated quantum processing units (QPUs), and the adoption of these devices as accelerators within high-performance computing systems. Emphasizing the interface to the QPU, we analyze instruction set architectures based on reduced and complex instruction sets, i.e., RISC and CISC architectures. We clarify the role of conventional constraints on memory addressing and instruction widths within the quantum computing context. Finally, we examine existing quantum computing platforms, including the D-Wave 2000Q and IBM Quantum Experience, within the context of future ISA development and HPC needs.
    Comment: To be published in the proceedings of the International Supercomputing Conference 2017

    Quantum Bootstrap Aggregation

    We set out a strategy for quantizing attribute bootstrap aggregation to enable variance-resilient quantum machine learning. To do so, we utilise the linear decomposability of decision boundary parameters in the Rebentrost et al. Support Vector Machine to guarantee that stochastic measurement of the output quantum state will give rise to an ensemble decision without destroying the superposition over projective feature subsets induced within the chosen SVM implementation. We achieve a linear performance advantage, O(d), in addition to the existing O(log(n)) advantages of quantization as applied to Support Vector Machines. The approach extends to any form of quantum learning giving rise to linear decision boundaries.
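
The classical technique being quantized here is attribute (feature-subspace) bootstrap aggregation: each ensemble member is trained on a random subset of features, and the ensemble decision is a majority vote. A minimal classical sketch, using a simple centroid-based linear classifier as a stand-in for the SVM and hypothetical toy data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two Gaussian blobs in 4 dimensions, labels -1/+1.
X = np.vstack([rng.normal(-1, 1, (50, 4)), rng.normal(+1, 1, (50, 4))])
y = np.array([-1] * 50 + [+1] * 50)

def fit_centroid_classifier(X, y):
    """Linear decision boundary from class centroids (stand-in for an SVM)."""
    mu_pos = X[y == +1].mean(axis=0)
    mu_neg = X[y == -1].mean(axis=0)
    w = mu_pos - mu_neg
    b = -w @ (mu_pos + mu_neg) / 2
    return w, b

# Attribute bagging: each member sees a random subset of the features.
members = []
for _ in range(25):
    feats = rng.choice(4, size=2, replace=False)
    w, b = fit_centroid_classifier(X[:, feats], y)
    members.append((feats, w, b))

def predict(x):
    # Majority vote over the ensemble -- the "ensemble decision".
    votes = [np.sign(w @ x[feats] + b) for feats, w, b in members]
    return np.sign(sum(votes))

accuracy = np.mean([predict(x) == t for x, t in zip(X, y)])
```

The quantum version in the abstract effects this vote through measurement of a superposition over feature subsets rather than an explicit loop over members.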

    (Pseudo) Random Quantum States with Binary Phase

    We prove a quantum information-theoretic conjecture due to Ji, Liu and Song (CRYPTO 2018) which suggested that a uniform superposition with random binary phase is statistically indistinguishable from a Haar random state. That is, any polynomial number of copies of the aforementioned state is within exponentially small trace distance from the same number of copies of a Haar random state. As a consequence, we get a provable elementary construction of pseudorandom quantum states from post-quantum pseudorandom functions. Generating pseudorandom quantum states is desirable for physical applications as well as for computational tasks such as quantum money. We observe that replacing the pseudorandom function with a (2t)-wise independent function (either in our construction or in previous work) results in an explicit construction for quantum state t-designs for all t. In fact, we show that the circuit complexity (in terms of both circuit size and depth) of constructing t-designs is bounded by that of (2t)-wise independent functions. Explicitly, while in prior literature t-designs required linear depth (for t > 2), this observation shows that polylogarithmic depth suffices for all t. We note that our constructions yield pseudorandom states and state designs with only real-valued amplitudes, which was not previously known. Furthermore, generating these states requires quantum circuits of a restricted form: one layer of Hadamard gates, followed by a sequence of Toffoli gates. This structure may be useful for efficiency and simplicity of implementation.
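
The state in question is a uniform superposition whose amplitudes carry a phase of (-1)^{f(x)} for a binary function f. A minimal numpy sketch of the state vector, with a truly random f standing in for the post-quantum pseudorandom function of the actual construction:

```python
import numpy as np

n = 6          # number of qubits (illustrative)
N = 2 ** n
rng = np.random.default_rng(1)

# A random binary function f : {0,1}^n -> {0,1}, standing in for a PRF.
f = rng.integers(0, 2, size=N)

# |psi_f> = 2^{-n/2} * sum_x (-1)^{f(x)} |x>
# Note the amplitudes are all real, as the abstract emphasises.
psi = (-1.0) ** f / np.sqrt(N)

norm = np.linalg.norm(psi)
```

Every amplitude has the same magnitude 2^{-n/2}; only the signs carry information, which is why one Hadamard layer plus classical-reversible (Toffoli) phase logic suffices to prepare it.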

    Generating reversible circuits from higher-order functional programs

    Boolean reversible circuits are Boolean circuits made of reversible elementary gates. Despite their constrained form, they can simulate any Boolean function. The synthesis and validation of a reversible circuit simulating a given function is a difficult problem. In 1973, Bennett proposed to generate reversible circuits from traces of execution of Turing machines. In this paper, we propose a novel presentation of this approach, adapted to higher-order programs. Starting with a PCF-like language, we use a monadic representation of the trace of execution to turn a regular Boolean program into circuit-generating code. We show that a circuit traced out of a program computes the same Boolean function as the original program. This technique has been successfully applied to generate large oracles with the quantum programming language Quipper.
    Comment: 21 pages. A shorter preprint has been accepted for publication in the Proceedings of Reversible Computation 2016. The final publication is available at http://link.springer.co
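
The Bennett-style idea rests on the elementary reversible gates (CNOT, Toffoli) being self-inverse permutations of bit strings, so a recorded trace of gate applications can be uncomputed by replaying it backwards. A minimal sketch simulating this on classical bits (gate names and wiring are illustrative, not Quipper's API):

```python
def toffoli(bits, a, b, t):
    """Toffoli (CCNOT): flip bit t iff bits a and b are both 1."""
    if bits[a] and bits[b]:
        bits[t] ^= 1

def cnot(bits, c, t):
    """CNOT: flip bit t iff bit c is 1."""
    if bits[c]:
        bits[t] ^= 1

# A small reversible circuit recorded as a trace of gate applications.
circuit = [(toffoli, (0, 1, 3)), (cnot, (2, 3)), (toffoli, (1, 2, 4))]

def run(bits, circuit):
    for gate, wires in circuit:
        gate(bits, *wires)
    return bits

# Each gate is its own inverse, so replaying the trace in reverse
# order uncomputes the circuit exactly.
state = [1, 1, 1, 0, 0]
after = run(list(state), circuit)
restored = run(list(after), list(reversed(circuit)))
```

The paper's contribution is generating such traces not from Turing-machine runs but from higher-order functional programs via a monadic interpreter.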

    The Road to Quantum Computational Supremacy

    We present an idiosyncratic view of the race for quantum computational supremacy. Google's approach and IBM's challenge are examined. An unexpected side-effect of the race is the significant progress in designing fast classical algorithms. Quantum supremacy, if achieved, won't make classical computing obsolete.
    Comment: 15 pages, 1 figure

    De novo mutations in SMCHD1 cause Bosma arhinia microphthalmia syndrome and abrogate nasal development

    Bosma arhinia microphthalmia syndrome (BAMS) is an extremely rare and striking condition characterized by complete absence of the nose with or without ocular defects. We report here that missense mutations in the epigenetic regulator SMCHD1 mapping to the extended ATPase domain of the encoded protein cause BAMS in all 14 cases studied. All mutations were de novo where parental DNA was available. Biochemical tests and in vivo assays in Xenopus laevis embryos suggest that these mutations may behave as gain-of-function alleles. This finding is in contrast to the loss-of-function mutations in SMCHD1 that have been associated with facioscapulohumeral muscular dystrophy (FSHD) type 2. Our results establish SMCHD1 as a key player in nasal development and provide biochemical insight into its enzymatic function that may be exploited for development of therapeutics for FSHD.

    The Born supremacy: quantum advantage and training of an Ising Born machine

    The search for an application of near-term quantum devices is widespread. Quantum Machine Learning is touted as a potential utilisation of such devices, particularly those which are out of the reach of the simulation capabilities of classical computers. In this work, we propose a generative Quantum Machine Learning model, called the Ising Born Machine (IBM), which we show cannot, in the worst case, and up to suitable notions of error, be simulated efficiently by a classical device. We also show this holds for all the circuit families encountered during training. In particular, we explore quantum circuit learning using non-universal circuits derived from Ising Model Hamiltonians, which are implementable on near-term quantum devices. We propose two novel training methods for the IBM by utilising the Stein Discrepancy and the Sinkhorn Divergence cost functions. We show numerically, both using a simulator within Rigetti's Forest platform and on the Aspen-1 16Q chip, that the cost functions we suggest outperform the more commonly used Maximum Mean Discrepancy (MMD) for differentiable training. We also propose an improvement to the MMD via a novel utilisation of quantum kernels, which we demonstrate provides improvements over its classical counterpart. We discuss the potential of these methods to learn `hard' quantum distributions, a feat which would demonstrate the advantage of quantum over classical computers, and provide the first formal definitions for what we call `Quantum Learning Supremacy'. Finally, we propose a novel view on the area of quantum circuit compilation by using the IBM to `mimic' target quantum circuits using classical output data only.
    Comment: v3: Close to journal published version - significant text structure change, split into main text & appendices. See v2 for unsplit version; v2: Typos corrected, figures altered slightly; v1: 68 pages, 39 figures. Comments welcome. Implementation at https://github.com/BrianCoyle/IsingBornMachin
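
The MMD baseline mentioned here, in its classical kernel form, compares the samples drawn from two distributions through a kernel mean embedding. A minimal sketch of a (biased) squared-MMD estimate with a Gaussian kernel (bandwidth and data are illustrative, not the paper's settings):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel between two sample vectors."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    """Biased (V-statistic) estimate of squared Maximum Mean Discrepancy."""
    k = lambda A, B: np.mean([[gaussian_kernel(a, b, sigma) for b in B]
                              for a in A])
    return k(X, X) + k(Y, Y) - 2 * k(X, Y)

rng = np.random.default_rng(2)
# Same distribution twice: MMD^2 should be near zero.
same = mmd2(rng.normal(0, 1, (40, 2)), rng.normal(0, 1, (40, 2)))
# Well-separated distributions: MMD^2 should be clearly larger.
diff = mmd2(rng.normal(0, 1, (40, 2)), rng.normal(3, 1, (40, 2)))
```

The paper's improvement replaces the classical kernel above with one evaluated by a quantum circuit; the Stein Discrepancy and Sinkhorn Divergence costs it proposes are alternative distances over the same sampled outputs.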

    Variation analysis and gene annotation of eight MHC haplotypes: The MHC Haplotype Project

    The human major histocompatibility complex (MHC) is contained within about 4 Mb on the short arm of chromosome 6 and is recognised as the most variable region in the human genome. The primary aim of the MHC Haplotype Project was to provide a comprehensively annotated reference sequence of a single, human leukocyte antigen-homozygous MHC haplotype and to use it as a basis against which variations could be assessed from seven other similarly homozygous cell lines, representative of the most common MHC haplotypes in the European population. Comparison of the haplotype sequences, including four haplotypes not previously analysed, resulted in the identification of >44,000 variations, both substitutions and indels (insertions and deletions), which have been submitted to the dbSNP database. The gene annotation uncovered haplotype-specific differences and confirmed the presence of more than 300 loci, including over 160 protein-coding genes. Combined analysis of the variation and annotation datasets revealed 122 gene loci with coding substitutions, of which 97 were non-synonymous. The haplotype (A3-B7-DR15; PGF cell line), designated as the new MHC reference sequence, has been incorporated into the human genome assembly (NCBI35 and subsequent builds), and constitutes the largest single-haplotype sequence of the human genome to date. The extensive variation and annotation data derived from the analysis of seven further haplotypes have been made publicly available and provide a framework and resource for future association studies of all MHC-associated diseases and transplant medicine.

    Towards the fast scrambling conjecture

    Many proposed quantum mechanical models of black holes include highly nonlocal interactions. The time required for thermalization to occur in such models should reflect the relaxation times associated with classical black holes in general relativity. Moreover, the time required for a particularly strong form of thermalization to occur, sometimes known as scrambling, determines the time scale on which black holes should start to release information. It has been conjectured that black holes scramble in a time logarithmic in their entropy, and that no system in nature can scramble faster. In this article, we address the conjecture from two directions. First, we exhibit two examples of systems that do indeed scramble in logarithmic time: Brownian quantum circuits and the antiferromagnetic Ising model on a sparse random graph. Unfortunately, both fail to be truly ideal fast scramblers for reasons we discuss. Second, we use Lieb-Robinson techniques to prove a logarithmic lower bound on the scrambling time of systems with finite norm terms in their Hamiltonian. The bound holds in spite of any nonlocal structure in the Hamiltonian, which might permit every degree of freedom to interact directly with every other one.
    Comment: 34 pages. v2: typo corrected

    Pooled extracellular receptor-ligand interaction screening using CRISPR activation.

    Extracellular interactions between cell surface receptors are necessary for signaling and adhesion but identifying them remains technically challenging. We describe a cell-based genome-wide approach employing CRISPR activation to identify receptors for a defined ligand. We show that receptors for high-affinity antibodies and low-affinity ligands can be unambiguously identified when used in pools or as individual binding probes. We apply this technique to identify ligands for the adhesion G-protein-coupled receptors and show that the Nogo myelin-associated inhibitory proteins are ligands for ADGRB1. This method will enable extracellular receptor-ligand identification on a genome-wide scale.